Native Koreans’ perception of voicing in VC position: Prosodic restructuring effects on consonant identification
Abstract
Many cross-language perceptual models consider allophonic distributions in predicting patterns of cross-language perception. Allophonic processes, however, are related not only to the existence of particular phonetic events but also to their linkage to the particular context in which they occur. This study investigated how the determination of context and allophonic variation interact in the perception of second language speech. Korean restricts laryngeal contrasts in final position, whereas English has no such restriction. When Korean learners of English acquire the contrast in final position, it is possible that they reparse the prosodic structure, placing the contrast in a different prosodic location such as the prevocalic position of a perceptually epenthesized following syllable. To determine whether such prosodic reanalysis strategies might exist, 20 college-age inexperienced Korean learners of English were presented with American English labial and coronal obstruents in CV and VC nonsense words and were asked to perform two tasks: 1) consonant identification and 2) syllable counting. As reported in previous research (Lim, 2003), the results show that listeners often parsed the VC stimuli as two syllables. For the voiceless consonants, this reparsing afforded better accuracy in consonant identification, as expected. For voiced consonants, however, no comparable voicing benefit from prosodic restructuring was observed. This discrepancy between voiceless and voiced segments appears to arise because the intervocalic contrast in Korean fosters a bias toward the voiceless category, leading listeners to assign English final voiced consonants to the voiceless category. These results demonstrate that L2 learners may use prosodic restructuring, but such a strategy is not necessarily useful in the perception of second language speech.
Related Articles
Linguistic generalization in L2 consonant identification accuracy: a preliminary report
Cross-language perception of phonetic features was investigated via an experiment in which native speakers of Korean and English identified speech sounds varying across voicing (voiced vs. voiceless), place of articulation (labial vs. coronal), and manner of articulation (stop vs. fricative) features as well as prosodic context (syllable initial vs. syllable final). Because Korean has no anteri...
Perception of non-native phonemes in noise
We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification a...
Word segmentation in Persian continuous speech using F0 contour
Word segmentation in continuous speech is a complex cognitive process. Previous research on spoken word segmentation has revealed that in fixed-stress languages, listeners use acoustic cues to stress to segment speech into words. It has been further assumed that stress in non-final or non-initial position hinders the demarcative function of this prosodic factor. In Persian, stress is retract...
Perception of English syllable-final consonants by Chinese speakers and Japanese speakers
This study investigates perception by Mandarin Chinese and Japanese native speakers of English consonants in syllable-final position, and the effect of vowel duration as a cue to voicing in syllable-final stops. Two experiments were conducted in the study and the results revealed that Mandarin Chinese speakers performed better than Japanese speakers but the position of the consonant in the sylla...
Contributions of temporal encodings of voicing, voicelessness, fundamental frequency, and amplitude variation to audio-visual and auditory speech perception.
Auditory and audio-visual speech perception was investigated using auditory signals of invariant spectral envelope that temporally encoded the presence of voiced and voiceless excitation, variations in amplitude envelope and F0. In experiment 1, the contribution of the timing of voicing was compared in consonant identification to the additional effects of variations in F0 and the amplitude of v...